The Minimum Variance Unbiased Estimator

Authors

  • Clayton Scott
  • Rob Nowak
Abstract

This module motivates and introduces the minimum variance unbiased estimator (MVUE). This is the primary criterion in the classical (frequentist) approach to parameter estimation. We introduce the concepts of mean squared error (MSE), variance, bias, unbiased estimators, and the bias-variance decomposition of the MSE.

1 In Search of a Useful Criterion

In parameter estimation, we observe an N-dimensional vector X of measurements. The distribution of X is governed by a density or probability mass function f_θ(x), which is parameterized by an unknown parameter θ. We would like to establish a useful criterion for guiding the design and assessing the quality of an estimator θ̂(x). We will adopt a classical (frequentist) view of the unknown parameter: it is not itself random, it is simply unknown.

One possibility is to try to design an estimator that minimizes the mean-squared error, that is, the expected squared deviation of the estimated parameter value from the true parameter value. For a scalar parameter, the MSE is defined by

$$ \mathrm{MSE}(\hat{\theta}, \theta) = E\left[ (\hat{\theta}(x) - \theta)^2 \right] \qquad (1) $$

For a vector parameter θ, this definition is generalized by

$$ \mathrm{MSE}(\hat{\theta}, \theta) = E\left[ (\hat{\theta}(x) - \theta)^T (\hat{\theta}(x) - \theta) \right] \qquad (2) $$

The expectation is with respect to the distribution of X. Note that for a given estimator, the MSE is a function of θ.

While the MSE is a perfectly reasonable way to assess the quality of an estimator, it does not lead to a useful design criterion. Indeed, the estimator that minimizes the MSE is simply

$$ \hat{\theta}(x) = \theta \qquad (3) $$

Unfortunately, this depends on the value of the unknown parameter, and is therefore not realizable! We need a criterion that leads to a realizable estimator.

Note: In the Bayesian approach to parameter estimation, the MSE is a useful design rule.

2 The Bias-Variance Decomposition of the MSE

It is possible to rewrite the MSE in such a way that a useful optimality criterion for estimation emerges. For a scalar parameter θ,

$$ \mathrm{MSE}(\hat{\theta}, \theta) = E\left[ (\hat{\theta} - \theta)^2 \right] = E\left[ (\hat{\theta} - E[\hat{\theta}])^2 \right] + \left( E[\hat{\theta}] - \theta \right)^2 $$

where the cross term vanishes because θ̂ − E[θ̂] has zero mean. This expression is called the bias-variance decomposition of the mean-squared error. The first term on the right-hand side is called the variance of the estimator, and the second term on the right-hand side is the square of the bias of the estimator. The formal definition of these concepts for vector parameters is now given. Let θ̂ be an estimator of the parameter θ.

Definition 1: variance
The variance of θ̂ is

$$ \mathrm{Var}(\hat{\theta}) = E\left[ (\hat{\theta} - E[\hat{\theta}])^T (\hat{\theta} - E[\hat{\theta}]) \right] $$

Definition 2: bias
The bias of θ̂ is

$$ \mathrm{Bias}(\hat{\theta}, \theta) = E[\hat{\theta}] - \theta $$

The bias-variance decomposition also holds for vector parameters:

$$ \mathrm{MSE}(\hat{\theta}, \theta) = \mathrm{Var}(\hat{\theta}) + \left\| \mathrm{Bias}(\hat{\theta}, \theta) \right\|^2 $$

The proof is a straightforward generalization of the argument for the scalar parameter case.

Exercise 1: Prove the bias-variance decomposition of the MSE for the vector parameter case.

3 The Bias-Variance Tradeoff

The MSE decomposes into the sum of two non-negative terms, the squared bias and the variance. In general, for an arbitrary estimator, both of these terms will be nonzero. Furthermore, as an estimator is modified so that one term increases, typically the other term will decrease. This is the so-called bias-variance tradeoff. The following example illustrates this effect.

Example 1: Let Ã = α (1/N) ∑_{n=1}^{N} x_n, where x_n = A + w_n, w_n ~ N(0, σ²), and α is an arbitrary constant. Let's find the value of α that minimizes the MSE

$$ \mathrm{MSE}(\tilde{A}) = E\left[ (\tilde{A} - A)^2 \right] \qquad (4) $$

Note: Ã = α S_N, where S_N = (1/N) ∑_{n=1}^{N} x_n ~ N(A, σ²/N).

Expanding the square gives

$$ \mathrm{MSE}(\tilde{A}) = E[\tilde{A}^2] - 2 E[\tilde{A}] A + A^2 = \alpha^2 \frac{1}{N^2} \sum_{i,j=1}^{N} E[x_i x_j] - 2 \alpha \frac{1}{N} \sum_{n=1}^{N} E[x_n] A + A^2 \qquad (5) $$

Since the x_n are independent with mean A and variance σ²,

$$ E[x_i x_j] = \begin{cases} A^2 + \sigma^2 & i = j \\ A^2 & i \neq j \end{cases} $$

so that (1/N²) ∑_{i,j} E[x_i x_j] = A² + σ²/N, and

$$ \mathrm{MSE}(\tilde{A}) = \alpha^2 \left( A^2 + \frac{\sigma^2}{N} \right) - 2 \alpha A^2 + A^2 = \frac{\alpha^2 \sigma^2}{N} + (\alpha - 1)^2 A^2 \qquad (6) $$

In terms of the bias-variance decomposition, the variance of Ã is α²σ²/N and its bias is (α − 1)A, so the two terms in (6) are exactly the variance and the squared bias. Setting

$$ \frac{\partial}{\partial \alpha} \mathrm{MSE}(\tilde{A}) = \frac{2 \alpha \sigma^2}{N} + 2 (\alpha - 1) A^2 = 0 $$

yields the minimizer

$$ \alpha^* = \frac{A^2}{A^2 + \sigma^2 / N} \qquad (7) $$

The optimal value α* depends on the unknown parameter A! Therefore the estimator is not realizable. Note that the problematic dependence on the parameter enters through the bias component of the MSE. Therefore, a reasonable alternative is to constrain the estimator to be unbiased, and then find the estimator that produces the minimum variance (and hence the minimum MSE among all unbiased estimators).

Note: Sometimes no unbiased estimator exists, and we cannot proceed at all in this direction.

In this example, note that as the value of α varies, one of the squared bias and variance terms increases while the other decreases. Furthermore, note that the dependence of the MSE on the unknown parameter is manifested in the bias.
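The tradeoff in (6) is easy to see numerically. The following minimal Python sketch evaluates the closed-form variance, squared bias, and MSE of Ã over a grid of α values and checks the grid minimizer against formula (7); the values A = 1.0, σ = 2.0, and N = 10 are arbitrary choices for illustration, not values from the module.

```python
import numpy as np

# Bias-variance tradeoff for the scaled sample mean of Example 1:
#   A_tilde = alpha * (1/N) * sum(x_n),  x_n = A + w_n,  w_n ~ N(0, sigma^2).
# Closed forms from (6): variance = alpha^2 sigma^2 / N, bias^2 = (alpha-1)^2 A^2.
# A, sigma, N below are arbitrary illustrative choices.
A, sigma, N = 1.0, 2.0, 10

alphas = np.linspace(0.0, 1.5, 301)
variance = alphas**2 * sigma**2 / N        # grows as alpha increases
bias_sq = (alphas - 1.0)**2 * A**2         # shrinks as alpha approaches 1
mse = variance + bias_sq                   # bias-variance decomposition

alpha_grid = alphas[np.argmin(mse)]        # minimizer found by grid search
alpha_star = A**2 / (A**2 + sigma**2 / N)  # the formula in (7)
print(f"grid minimizer: {alpha_grid:.3f}")
print(f"alpha*:         {alpha_star:.3f}")  # agrees to the grid resolution
```

Sweeping the grid shows the variance rising and the squared bias falling as α grows, with the minimum-MSE balance struck at α*, which indeed depends on the unknown A.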
MSE ( à ) = E [( Ã−A )2] (4) note: à = αSN , SN ∼ N ( A, σ 2 N ) MSE ( à ) = E [ à ] − 2E [ à ] A + A = αE [ 1 N2 ∑N ij=1 (xixj) ] − 2αE [ 1 N ∑N n=1 xn ] A + A = α 1 N2 ∑N ij=1 (E [xixj ])− 2α 1 N ∑N n=1 (E [xn]) +A 2 (5) http://cnx.rice.edu/content//latest/ http://cnx.org/content/m11426/latest/ Connexions module: m11426 3 E [xixj ] = { A + σ if i = j A if i 6= j MSE ( à ) = α ( A + σ 2 N ) − 2αA + A = α 2σ2 N + (α− 1) 2 A (6) σ ( à )2 = ασ N Bias ( à ) = (α− 1)A2 ∂ ∂α MSE ( à ) = 2ασ N + 2 (α− 1)A = 0 α∗ = A A2 + σ2 N (7) The optimal value α∗ dpends on the unknown parameter A! Therefore the estimator is not realizable. Note that the problematic dependence on the parameter enters through the Bias component of the MSE. Therefore, a reasonable alternative is to constrain the estimator to be unbiased, and then nd the estimator that produces the minimum variance (and hence provides the minimum MSE among all unbiased estimators). note: Sometimes no unbiased estimator exists, and we cannot proceed at all in this direction. In this example, note that as the value of α varies, one of the squared bias or variance terms increases, while the other one decreases. Futhermore, note that the dependence of the MSE on the unknown parameter is manifested in the bias. 4 Unbiased Estimators Since the bias depends on the value of the unknown parameter, it seems that any estimation criterion that depends on the bias would lead to an unrealizable estimator, as the previous example (Example 1) suggests (although in certain cases realizable minimum MSE estimators can be found). As an alternative to minimizing the MSE, we could focus on estimators that have a bias of zero. In this case, the bias contributes zero to the MSE, and in particular, it does not involve the unknown parameter. By focusing on estimators with zero bias, we may hope to arrive at a design criterion that yields realizable estimators. De nition 3: unbiased An estimator θ̂ is called unbiased if its bias is zero for all values of the unknown parameter. Equivalently, [Insert 5] For an estimator to be unbiased we require that on average the estimator will yield the true value of the unknown parameter. We now give some examples. The sample mean of a random sample is always an unbiased estimator for the mean. Example 2: Estimate the DC level in the Guassian white noise. Suppose we have data x1, . . . , xN and model the data by ∀n, n ∈ {1, . . . , N} : xn = A + wn http://cnx.org/content/m11426/latest/ Connexions module: m11426 4 where A is the unknown DC level, and wn ∼ N ( σ, σ ) . The parameter is −∞ < A <∞. Consider the sample-mean estimator:

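The decomposition of Section 2 itself can be verified the same way. The sketch below uses the deliberately biased estimator Ã = α · (sample mean) from Example 1, with the arbitrary illustrative choices α = 0.7, A = 1.0, σ = 2.0, and N = 10, and estimates the MSE, variance, and squared bias separately from simulated data, confirming that the first equals the sum of the other two.

```python
import numpy as np

# Monte Carlo check of the decomposition MSE = variance + bias^2, using the
# deliberately biased estimator A_tilde = alpha * (sample mean) from Example 1.
# alpha, A, sigma, N are arbitrary illustrative choices.
rng = np.random.default_rng(1)
alpha, A, sigma, N, trials = 0.7, 1.0, 2.0, 10, 200_000

x = A + sigma * rng.standard_normal((trials, N))
A_tilde = alpha * x.mean(axis=1)

mse = ((A_tilde - A)**2).mean()       # expected squared error
variance = A_tilde.var()              # spread about the estimator's mean
bias_sq = (A_tilde.mean() - A)**2     # squared bias

print(f"MSE:          {mse:.4f}")
print(f"var + bias^2: {variance + bias_sq:.4f}")  # matches the MSE
print(f"closed form:  {alpha**2 * sigma**2 / N + (alpha - 1)**2 * A**2:.4f}")
```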